How To Overcome Confirmation Bias in Semi-Supervised Image Classification By Active Learning
Do we need active learning? The rise of strong deep semi-supervised methods
raises doubt about the usability of active learning in limited labeled data
settings. This is caused by results showing that combining semi-supervised
learning (SSL) methods with a random selection for labeling can outperform
existing active learning (AL) techniques. However, these results come from
experiments on well-established benchmark datasets, which can overestimate
their external validity. Moreover, the literature lacks sufficient research on
the performance of active semi-supervised learning methods in realistic data
scenarios, leaving a notable gap in our understanding. Therefore, we present
three data challenges common in real-world applications: between-class
imbalance, within-class imbalance, and between-class similarity. These
challenges can hurt SSL performance due to confirmation bias. We conduct
experiments with SSL and AL on simulated data challenges and find that random
sampling does not mitigate confirmation bias and, in some cases, leads to worse
performance than supervised learning. In contrast, we demonstrate that AL can
overcome confirmation bias in SSL in these realistic settings. Our results
provide insights into the potential of combining active and semi-supervised
learning in the presence of common real-world challenges, which is a promising
direction for robust methods when learning with limited labeled data in
real-world applications.

Comment: Accepted @ ECML PKDD 2023. This is the author's version of the work.
The definitive Version of Record will be published in the Proceedings of ECML
PKDD 202
Type B Reflexivization as an Unambiguous Testbed for Multilingual Multi-Task Gender Bias
The one-sided focus on English in previous studies of gender bias in NLP
misses out on opportunities in other languages: English challenge datasets such
as GAP and WinoGender highlight model preferences that are "hallucinatory",
e.g., disambiguating gender-ambiguous occurrences of 'doctor' as male doctors.
We show that for languages with type B reflexivization, e.g., Swedish and
Russian, we can construct multi-task challenge datasets for detecting gender
bias that lead to unambiguously wrong model predictions: In these languages,
the direct translation of 'the doctor removed his mask' is not ambiguous
between a coreferential reading and a disjoint reading. Instead, the
coreferential reading requires a non-gendered pronoun, and the gendered,
possessive pronouns are anti-reflexive. We present a multilingual, multi-task
challenge dataset, which spans four languages and four NLP tasks and focuses
only on this phenomenon. We find evidence for gender bias across all
task-language combinations and correlate model bias with national labor market
statistics.

Comment: To appear in EMNLP 202
The Danish Gigaword Project
Danish is a North Germanic/Scandinavian language spoken primarily in Denmark,
a country with a tradition of technological and scientific innovation. However,
from a technological perspective, the Danish language has received relatively
little attention and, as a result, Danish language technology is hard to
develop, in part due to a lack of large or broad-coverage Danish corpora. This
paper describes the Danish Gigaword project, which aims to construct a
freely-available one billion word corpus of Danish text that represents the
breadth of the written language.